Single timescale regularized stochastic approximation schemes for monotone Nash games under uncertainty
Abstract—In this paper, we consider the distributed computation of equilibria arising in monotone stochastic Nash games over continuous strategy sets. Such games arise in settings where the gradient map of the player objectives is a monotone mapping over the Cartesian product of strategy sets, leading to a monotone stochastic variational inequality. We consider the application of projection-based stochastic approximation schemes. However, such techniques are characterized by a key shortcoming: they can accommodate strongly monotone mappings only. In fact, standard extensions of stochastic approximation schemes to merely monotone mappings require the solution of a sequence of related strongly monotone problems, a natively two-timescale scheme. Accordingly, we consider the development of single-timescale techniques for computing equilibria when the associated gradient map does not admit strong monotonicity. We first show that, under suitable assumptions, standard projection schemes can indeed be extended to allow for strict, rather than strong, monotonicity. Furthermore, we introduce a class of regularized stochastic approximation schemes in which the regularization parameter is updated at every step, leading to a single-timescale method. The scheme is a stochastic extension of an iterative Tikhonov regularization method, and its global convergence is established. To aid in networked implementations, we consider an extension of this result in which players are allowed to choose their steplengths independently, and show that if the deviation across their choices is suitably constrained, then convergence of the scheme may still be claimed.
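To make the single-timescale idea concrete, here is a minimal sketch of such a regularized stochastic approximation scheme (our own illustration, not the paper's algorithm: the decay rates, the box constraint set, and the skew-symmetric test map are all assumptions chosen for demonstration):

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto a box strategy set."""
    return np.clip(x, lo, hi)

def tikhonov_sa(F, x0, lo, hi, iters=20000, noise=0.1, seed=0):
    """Single-timescale regularized SA sketch: at step k, take a projected
    step along a noisy sample of the regularized map F(x) + eps_k * x,
    with both the stepsize gamma_k and the regularization eps_k decaying
    every iteration (illustrative rates, not the paper's conditions)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        gamma = 1.0 / k**0.6   # stepsize sequence
        eps = 1.0 / k**0.3     # regularization, decaying more slowly
        sample = F(x) + noise * rng.standard_normal(x.shape)  # noisy oracle
        x = project_box(x - gamma * (sample + eps * x), lo, hi)
    return x

# A merely monotone (not strongly monotone) test map: F(x) = A x with A skew
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = tikhonov_sa(lambda x: A @ x, x0=[1.0, 1.0], lo=-2.0, hi=2.0)
```

Because `A` is skew-symmetric, the map is monotone but not strongly monotone, which is exactly the regime where plain projection-based stochastic approximation can fail; here the vanishing regularization `eps` steers the iterates toward the solution x = 0 without any inner loop.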
Distributed multiuser optimization: Algorithms and error analysis
Abstract—We consider a class of multiuser optimization problems in which user interactions are seen through congestion cost functions or coupling constraints. Our primary emphasis lies on the convergence and error analysis of distributed algorithms in which users communicate through aggregate user information. Traditional implementations rely on strong convexity assumptions, require coordination across users in terms of consistent stepsizes, and often rule out early termination by a group of users. We consider how some of these assumptions can be weakened in the context of projection methods motivated by fixed-point formulations of the problem. Specifically, we focus on (approximate) primal and primal-dual projection algorithms. We analyze the convergence behavior of the methods and provide error bounds in settings with limited coordination across users, as well as regimes where a group of users may prematurely terminate, affecting the convergence point.
Distributed algorithms for networked multi-agent systems: optimization and competition
This thesis pertains to the development of distributed algorithms in
the context of networked multi-agent systems. Such engineered systems may be
tasked with a variety of goals, ranging from the solution of
optimization problems to addressing the solution of variational
inequality problems.
Two key complicating characteristics of multi-agent systems are the
following: (i) the lack of availability of
system-wide information at any given location; and (ii) the
absence of any central coordinator. These intricacies make it infeasible to collect all the
information at a location and preclude the use of centralized
algorithms. Consequently, a fundamental challenge in the design of such systems
is the development of algorithms that can support their
functioning. Accordingly, our goal lies in developing distributed
algorithms that can be implemented at a local level while guaranteeing a
global system-level requirement. In such
techniques, each agent uses locally available information,
including that accessible from its immediate neighbors, to update
its decisions, rather than relying on the decisions of all
agents. This thesis focuses on multi-agent systems
tasked with the solution of three sets of problems: (i)
convex optimization problems; (ii) Cartesian
variational inequality problems; and (iii) a sub-class of Nash games.
In the first part of this thesis, we consider a multiuser
convex optimization
problem. Traditionally, a multiuser problem is a constrained
optimization problem associated with a set of users (or agents). Such problems
are characterized by an objective given
by a sum of user-specific utility functions, and a collection of
separable constraints that couple user decisions. We assume that
user-specific utility information is private while users may communicate values
of their decision variables. The multiuser problem is to maximize the
sum of the user-specific utility functions subject to the coupling
constraints, while abiding by the informational requirements of each
user. In this part of the thesis, we focus on generalizations of convex multiuser
optimization problems where the objective and constraints are not
separable by user and instead consider instances where user decisions
are coupled, both in the objective and through nonlinear coupling
constraints. To solve this problem, we consider the application of
distributed gradient-based algorithms on an approximation of the
multiuser problem. Such an approximation is obtained through a
regularization and is equipped with bounds on the difference between
the optimal function values of the original problem and its regularized
counterpart. In the algorithmic development, we consider constant
stepsize primal-dual and dual schemes in which the iterate computations
are distributed naturally across the users, i.e., each user updates its
own decision only. We observe that a generalization of this result is
also available when users choose their stepsize and regularization
parameters independently from a prescribed range.
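A minimal sketch of a constant-stepsize regularized primal-dual scheme of this flavor might look as follows (a hypothetical two-user problem with a coupled objective and, for simplicity, a linear coupling constraint; the stepsize and regularization values are illustrative, not the thesis's prescribed ranges):

```python
import numpy as np

def regularized_primal_dual(iters=5000, alpha=0.05, nu=0.01, eps=0.01):
    """Constant-stepsize regularized primal-dual sketch (illustrative).
    Two users jointly minimize f(x) = (x1-2)^2 + (x2-2)^2 + x1*x2
    subject to the coupling constraint x1 + x2 <= 2. Each user updates
    only its own decision x_i; the dual variable of the shared
    constraint is updated with the same constant stepsize."""
    x = np.zeros(2)
    lam = 0.0
    for _ in range(iters):
        # per-user gradients of the coupled objective
        grad = np.array([2.0 * (x[0] - 2.0) + x[1],
                         2.0 * (x[1] - 2.0) + x[0]])
        # regularized primal step (each user touches only its own block)
        x = x - alpha * (grad + lam * np.ones(2) + nu * x)
        # regularized dual ascent, projected onto the nonnegative orthant
        lam = max(0.0, lam + alpha * ((x[0] + x[1] - 2.0) - eps * lam))
    return x, lam

x, lam = regularized_primal_dual()
```

The regularization parameters `nu` and `eps` make the primal-dual map strongly monotone, which is what permits a constant stepsize; the computed point approximates the exact solution x = (1, 1), lambda = 1 of the original problem, with the gap controlled by the size of the regularization.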
The second part of this thesis is devoted to the solution of a
Cartesian variational inequality (VI) problem. A Cartesian VI
provides a unifying framework for studying multi-agent systems
including regimes in which agents either cooperate or compete in a Nash game. Under suitable convexity assumptions, the sufficient
optimality (or equilibrium) conditions of such problems can be cast as a Cartesian VI. We
consider a monotone stochastic Cartesian variational inequality
problem that naturally arises from convex optimization problems or a
subclass of Nash games over continuous strategy sets. Almost sure
convergence of standard
implementations of stochastic approximation relies on strong
monotonicity of the mappings arising in such variational inequality
problems. Our interest lies in weakening this requirement and
this motivates the development of
distributed iterative stochastic approximation algorithms.
We introduce two classes of stochastic approximation methods, each of
which requires exactly one projection step at every iteration, and
provide convergence analysis for them. Of these, the first is a
stochastic iterative Tikhonov regularization method, which necessitates
an update of the regularization parameter at every iteration. The second
method is a stochastic iterative proximal-point method, where the
centering term is updated after every iteration. Conditions are provided
for recovering global convergence in limited coordination extensions of
such schemes where agents are allowed to choose their stepsize
sequences, regularization and centering parameters independently, while
meeting a suitable coordination requirement. We apply the proposed
class of techniques and their limited coordination versions to a
stochastic networked rate allocation problem.
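One plausible single-projection instantiation of the stochastic iterative proximal-point idea is sketched below (our own illustration: the parameter sequences, the re-anchoring of the centering term at the previous iterate, and the merely monotone test map are assumptions, and the thesis's exact coordination conditions are not reproduced):

```python
import numpy as np

def prox_point_sa(F, x0, lo, hi, iters=20000, noise=0.1, seed=1):
    """Single-timescale stochastic iterative proximal-point sketch:
    exactly one projection per iteration, with the centering term
    theta_k * (x_k - center) re-anchored at the previous iterate
    every step (illustrative parameter sequences)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    center = x.copy()
    for k in range(1, iters + 1):
        gamma = 1.0 / k**0.6   # stepsize sequence
        theta = 1.0 / k**0.3   # centering weight
        sample = F(x) + noise * rng.standard_normal(x.shape)
        x_new = np.clip(x - gamma * (sample + theta * (x - center)), lo, hi)
        center = x             # move the proximal center each iteration
        x = x_new
    return x

# Merely monotone map: gradient of the convex (not strictly convex)
# function g(x) = 0.5 * (x1 + x2 - 1)^2; solutions satisfy x1 + x2 = 1.
F = lambda x: (x[0] + x[1] - 1.0) * np.ones(2)
x_sol = prox_point_sa(F, x0=[2.0, -2.0], lo=-3.0, hi=3.0)
```

The map here has an entire line of solutions, so it is monotone but not strongly (or even strictly) monotone; the vanishing proximal term plays the stabilizing role that strong monotonicity would otherwise provide.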
The focus of the third part of the thesis is on a class of games,
termed aggregative games, played over a
networked system. In an aggregative game, an agent's
objective function is coupled across agents through a function of the aggregate of
all agents' decisions. Every agent maintains an estimate of the
aggregate and agents exchange this information over a connected
network. We study two classes of distributed algorithms for
information exchange and computation of equilibrium. The first
method, a diffusion-based algorithm, operates in a synchronous setting
and can contend with time-varying connectivity of the underlying
network graph model. The second method, a gossip-based distributed
algorithm, is inherently asynchronous and is applicable when the
network is static. Our primary emphasis is on proving the
convergence of these algorithms under an assumption of a diminishing
(agent-specific) stepsize sequence. Under standard conditions, we
establish the almost-sure convergence of these algorithms to an
equilibrium point. Moreover, we develop and analyze the associated error bounds
when a constant stepsize (user-specific) is employed in the
gossip-based method. Finally, we present numerical results assessing
the performance of the diffusion and gossip algorithms for a
class of aggregative games across various network models and sizes.
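As a rough illustration of the gossip-based scheme on a static network, the following sketch (a hypothetical aggregative game of our own choosing on a complete network, not the thesis's test problem) has each agent maintain a local estimate of the average decision, mix it with a randomly selected peer, and take a diminishing-stepsize gradient step:

```python
import numpy as np

def gossip_aggregative(a, iters=100000, seed=0):
    """Gossip-style equilibrium computation for an illustrative
    aggregative game: agent i minimizes
        f_i(x_i, xbar) = (x_i - a_i)^2 + x_i * xbar,
    where xbar is the average of all agents' decisions. Each agent
    keeps a local estimate v_i of xbar; a randomly chosen pair of
    agents averages their estimates, then both take a gradient step
    with a diminishing stepsize."""
    rng = np.random.default_rng(seed)
    n = len(a)
    x = np.zeros(n)
    v = x.copy()                                # estimates of the average
    for k in range(1, iters + 1):
        i, j = rng.choice(n, size=2, replace=False)   # random gossip pair
        v[i] = v[j] = 0.5 * (v[i] + v[j])             # mix the estimates
        gamma = 2.0 / (k + 50.0)                      # diminishing stepsize
        for idx in (i, j):
            grad = 2.0 * (x[idx] - a[idx]) + v[idx] + x[idx] / n
            step = -gamma * grad
            x[idx] += step
            v[idx] += step   # keeps sum(v) == sum(x), so the estimates
                             # consense to the true average decision
    return x

a = np.array([1.0, 2.0, 3.0, 4.0])
x_gossip = gossip_aggregative(a)

# centralized reference: solve (2 + 1/n) x_i + (1/n) * sum(x) = 2 a_i
n = len(a)
M = (2.0 + 1.0 / n) * np.eye(n) + np.ones((n, n)) / n
x_ref = np.linalg.solve(M, 2.0 * a)
```

The invariant sum(v) == sum(x) ensures that the pairwise averaging drives every local estimate toward the true average decision, so the gossip iterates approach the same equilibrium obtained by solving the aggregated first-order conditions directly.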